(CodeQL) Sandboxed URL creation to prevent SSRF attacks #34
base: main
Conversation
| if (!"REPLACE_WITH_ALLOWED_HOST".equalsIgnoreCase(uri.getHost())) { | ||
| throw new IllegalArgumentException("Invalid host"); | ||
| } | ||
| HttpRequest request = requestBuilder.copy().uri(uri).build(); |
Check failure — Code scanning / CodeQL
Server-side request forgery (Critical): the request URL depends on a user-provided value.
Copilot Autofix (AI) · 8 months ago
To fix the SSRF vulnerability, we need to validate the user-provided URL against a predefined list of allowed hosts or URL prefixes. This ensures that the server only makes requests to trusted endpoints. The best approach is to maintain a list of allowed hosts in the configuration and validate `uri.getHost()` against this list before constructing the HTTP request.
Steps to implement the fix:
- Define a list of allowed hosts as a constant or retrieve it from a configuration file.
- Replace the placeholder `"REPLACE_WITH_ALLOWED_HOST"` with logic that checks whether the host of the `URI` is in the list of allowed hosts.
- Throw an exception if the host is not in the allowed list.
- Ensure the validation is applied consistently across all methods that use the `fetchFeed` function (a sketch follows this list).
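A minimal sketch of that validation, assuming illustrative host names and a hypothetical `FeedUrlValidator` helper (neither appears in the original diff):

```java
import java.net.URI;
import java.util.List;

final class FeedUrlValidator {
    // Illustrative allowlist; in practice, load it from configuration.
    private static final List<String> ALLOWED_HOSTS =
            List.of("trustedhost1.com", "trustedhost2.com");

    // Parses the URL and rejects any host not on the allowlist.
    static URI validate(String url) {
        URI uri = URI.create(url);
        String host = uri.getHost();
        if (host == null || !ALLOWED_HOSTS.contains(host)) {
            throw new IllegalArgumentException("Invalid host: " + host);
        }
        return uri;
    }
}
```

Callers such as `fetchFeed` could then run `FeedUrlValidator.validate(url)` before building any request.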
Modified lines R239–R241:
```diff
@@ -238,4 +238,5 @@
  URI uri = URI.create(url);
- if (!"REPLACE_WITH_ALLOWED_HOST".equalsIgnoreCase(uri.getHost())) {
-     throw new IllegalArgumentException("Invalid host");
+ List<String> allowedHosts = List.of("trustedhost1.com", "trustedhost2.com");
+ if (!allowedHosts.contains(uri.getHost())) {
+     throw new IllegalArgumentException("Invalid host: " + uri.getHost());
  }
```
Modified lines R188–R192:
```diff
@@ -187,3 +187,7 @@
  FeedFetcher fetcher = WebloggerFactory.getWeblogger().getFeedFetcher();
- sub = fetcher.fetchSubscription(getSubUrl());
+ String url = getSubUrl();
+ if (url == null || url.isEmpty()) {
+     throw new IllegalArgumentException("Subscription URL cannot be null or empty");
+ }
+ sub = fetcher.fetchSubscription(url);
```
Pixee Fix ID: 075b03a4-09ef-44f3-8d2c-d91a2e6a0afc
Confidence: HIGH
Fix confidence is a rating derived from an internal benchmark; fixes are rated High, Medium, or Low confidence. The rating comprises three weighted scores reflecting the safety, effectiveness, and cleanliness of Pixee's code changes within a fix. View details in Pixee.
Remediation
This change fixes findings identified by CodeQL.
Details
This change sandboxes the creation of `java.net.URL` objects so they will be more resistant to Server-Side Request Forgery (SSRF) attacks.

Most of the time when you create a URL, you're intending to reference an HTTP endpoint, like an internal microservice. However, URLs can point to local file system files, a Gopher stream in your local network, a JAR file on a remote Internet site, and all kinds of other unexpected and undesirable resources. When URL values are influenced by attackers, they can trick your application into fetching internal resources, running malicious code, or otherwise harming the system. Consider the following code:
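A minimal sketch of the pattern being described, with `userInput` assumed to be attacker-influenced:

```java
import java.io.InputStream;
import java.net.URL;

class UnsafeFetch {
    // userInput flows straight into URL creation: any protocol, any host.
    static InputStream fetch(String userInput) throws Exception {
        URL url = new URL(userInput);
        return url.openConnection().getInputStream();
    }
}
```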
In this case, an attacker could supply a value like `jar:file:/path/to/appserver/lib.jar` and attempt to read the contents of your application's code.

Our changes introduce sandboxing around URL creation that forces developers to specify boundaries on the types of URLs they expect to create:
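A sketch of what that sandboxing can look like with Pixee's java-security-toolkit; the wrapper class and `userInput` name are assumptions:

```java
import java.io.InputStream;
import java.net.URL;
import io.github.pixee.security.HostValidator;
import io.github.pixee.security.Urls;

class SandboxedFetch {
    // Urls.create() refuses to build a URL for any protocol outside http/https.
    static InputStream fetch(String userInput) throws Exception {
        URL url = Urls.create(userInput, Urls.HTTP_PROTOCOLS, HostValidator.ALLOW_ALL);
        return url.openConnection().getInputStream();
    }
}
```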
This change alone reduces attack surface significantly, but it can be enhanced to add even more security by specifying controls around the hosts we expect to connect to:
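For instance, a sketch assuming the toolkit's `HostValidator.fromAllowedHostPattern(...)` factory and a hypothetical trusted domain:

```java
import java.io.InputStream;
import java.net.URL;
import java.util.regex.Pattern;
import io.github.pixee.security.HostValidator;
import io.github.pixee.security.Urls;

class HostRestrictedFetch {
    // Assumed factory: only hosts matching the trusted pattern may be fetched.
    static final HostValidator ALLOWED_HOSTS =
            HostValidator.fromAllowedHostPattern(Pattern.compile(".*\\.trusted\\.example\\.com"));

    static InputStream fetch(String userInput) throws Exception {
        URL url = Urls.create(userInput, Urls.HTTP_PROTOCOLS, ALLOWED_HOSTS);
        return url.openConnection().getInputStream();
    }
}
```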
Note: Beware the temptation to write validation on your own. Parsing URLs is difficult, and differences between parsers in validation and execution have repeatedly been proven by attackers to lead to exploits.